

Mono Vs. Stereo Encoding?

I have some old operas from the 40s, 50s, and 60s on CDs that were originally recorded in mono, and I want to rip them to MP3. Would it make any sense to encode them in stereo (VBR joint stereo) if the original recording is OBVIOUSLY in mono? I've been contemplating whether to encode in VBR joint stereo, as I do with my other CDs, or in VBR mono. I'd really appreciate some advice on this. Thanks.

Reply #1
It should make no difference for mono sources whether you encode with mono or stereo VBR: the encoder will "notice" that both channels carry identical information and will encode the second channel very efficiently, so the bitrate for mono encoding should be about equal to the bitrate for stereo.

You might still want to pick stereo or mono for compatibility reasons
(old players that can't play mono or stereo files, for example).

Greetings, Primius

Reply #2
Using LAME, you should encode in mono in this case. LAME has some built-in safety checks that will not allow the side channel to be totally starved, so joint stereo will produce a higher bitrate than mono.

Reply #3
I'm curious whether they did any mono-to-stereo tricks for the CD. I know how to check for this in Adobe Audition using the channel mixer; unsure otherwise.
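For anyone without Audition, the same check can be sketched with only the Python standard library: read the stereo rip as a WAV and compare left and right samples. The function name and `tolerance` parameter here are my own invention, not from any tool mentioned in the thread.

```python
# Rough sketch: is a "stereo" WAV really dual mono (left == right)?
# Assumes 16-bit stereo PCM; stdlib only.
import io
import struct
import wave

def is_dual_mono(wav_file, tolerance=0):
    """True if every left sample matches its right sample within tolerance."""
    with wave.open(wav_file, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    left, right = samples[0::2], samples[1::2]
    return all(abs(l - r) <= tolerance for l, r in zip(left, right))

# Build a tiny dual-mono WAV in memory to demonstrate:
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(44100)
    for i in range(100):
        s = struct.pack("<h", (i * 300) % 20000)
        w.writeframes(s + s)          # identical left and right samples
buf.seek(0)
print(is_dual_mono(buf))              # True for this dual-mono file
```

A mono-to-stereo "trick" (reverb, delay, fake widening) would show up as a nonzero difference between the channels; a nonzero `tolerance` allows for dither or generation loss.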

Reply #4
I second Primius. Better to encode in JS. After all, how much difference will it make?

Reply #5
Well, thanks for your opinions! I suppose I'll just encode in joint stereo VBR. Thanks!

Reply #6
Quote
Using LAME, you should encode in mono in this case. LAME has some built-in safety checks that will not allow the side channel to be totally starved, so joint stereo will produce a higher bitrate than mono.

Say you use LAME to encode a stereo song in mono (using -m m). How is the stereo collapsed? Does it use an average? A sum? Is there a chance there could be phase cancellation?

Reply #7
Quote
Say you use LAME to encode a stereo song in mono (using -m m). How is the stereo collapsed? Does it use an average? A sum? Is there a chance there could be phase cancellation?

You must use -a if you want to downmix from stereo to mono.

(edit - fixed quote name. again.)

Reply #8
What happens otherwise? There was a topic with a similar discussion about these two options a while back; wasn't it determined that the inclusion of both options had to do with raw PCM input, and that the result of either would be the same? I apologise if I'm wrong.

Reply #9
Quote
What happens otherwise? There was a topic with a similar discussion about these two options a while back; wasn't it determined that the inclusion of both options had to do with raw PCM input, and that the result of either would be the same?

If you only use -m m and the source WAV is stereo, you will get an MP3 file twice as long, at half speed, containing "two channels in one".

J.M.

Reply #10
Quote
If you only use -m m and the source WAV is stereo, you will get an MP3 file twice as long, at half speed, containing "two channels in one".

J.M.

So -a makes it use some type of interpolation, then. About my other questions (phase cancellation, etc.), what of them?

Reply #11
Quote
So -a makes it use some type of interpolation, then. About my other questions (phase cancellation, etc.), what of them?

If you use -a, it simply averages the stereo samples (out-of-phase samples are cancelled, ...).

J.M.

Reply #12
Right you may be - but I quote the LAME documentation, specifically switchs.html:

Quote
* -a    downmix
    Mix the stereo input file to mono and encode as mono.
    The downmix is calculated as the sum of the left and right channel, attenuated by 6 dB.

    This option is only needed in the case of raw PCM stereo input (because LAME cannot determine the number of channels in the input file).
    To encode a stereo PCM input file as mono, use "lame -m s -a".

    For WAV and AIFF input files, using "-m m" will always produce a mono .mp3 file from both mono and stereo input.


Quote
mono
The input will be encoded as a mono signal. If it was a stereo signal, it will be downsampled to mono. The downmix is calculated as the sum of the left and right channel, attenuated by 6 dB.


What you have described must be some type of bug or exception; surely the developers would not let such an occurrence take place otherwise.
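As a sanity check on the quoted documentation: attenuating by 6 dB means multiplying by 10^(-6/20) ≈ 0.501, so "the sum of the left and right channel, attenuated by 6 dB" is, for practical purposes, the same as averaging the two channels. A throwaway check, with made-up sample values:

```python
# "Attenuated by 6 dB" is a gain of 10**(-6/20), which is almost exactly 0.5,
# so (L + R) attenuated by 6 dB is essentially the average (L + R) / 2.
gain = 10 ** (-6 / 20)
print(round(gain, 3))        # 0.501

L, R = 12000, -4000          # arbitrary example samples
print((L + R) * gain)        # ~4009.5  (sum attenuated by 6 dB)
print((L + R) / 2)           # 4000.0   (plain average)
```

The tiny discrepancy (exactly -6 dB vs. exactly 0.5) is well below anything audible; a later reply in the thread confirms the implementation simply averages.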

Reply #13
Quote
What you have described must be some type of bug or exception; surely the developers would not let such an occurrence take place otherwise.

OK, I used the dBpowerAMP frontend for LAME.exe - it must be that!

edit - yeah, it works fine with the -m m switch without the frontend

Reply #14
In the current version, the mono downmix is simply done by averaging the input samples, so yes, if you provide out-of-phase signals, they will cancel.
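A minimal illustration of that consequence, in plain Python with synthetic samples: averaging two identical but polarity-inverted channels yields silence.

```python
# Averaging downmix: mono = (L + R) / 2.
# If the right channel is the left channel with inverted polarity,
# the average is zero for every sample - total phase cancellation.
left = [1000, -2000, 3000, -500]
right = [-v for v in left]           # same signal, polarity inverted

mono = [(l + r) // 2 for l, r in zip(left, right)]
print(mono)                          # [0, 0, 0, 0]
```

Real recordings are rarely perfectly out of phase, but heavily decorrelated or phase-shifted stereo content will still lose energy under this downmix.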

Reply #15
... just like the "mono" switch on an amplifier

Reply #16
When the input is a WAV file, -a and -m m are similar.

Only with raw input or pipes might -m x be needed to specify the input channels; -m m is then different from -m s/j with -a.

Reply #17
Quote
Using LAME, you should encode in mono in this case. LAME has some built-in safety checks that will not allow the side channel to be totally starved, so joint stereo will produce a higher bitrate than mono.

Shouldn't this safety check be waived if the L and R channels are completely identical? In that case the S channel consists only of zeros, which compresses losslessly (e.g. with RLE) to virtually nothing.

(Whereas LAME 3.98.4 [March 2010] converts my 10-second mono sample file to 189,974 bytes, and the same 10 seconds as dual mono (L=R) to 234,204 bytes using joint stereo - a 23.3% increase.)
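For reference, the poster's point expressed in mid/side terms (a sketch with made-up sample values; whether a given LAME version actually spends bits on an all-zero side channel is a separate, empirical question, as the measurement above suggests):

```python
# Joint stereo's mid/side decomposition: M = (L + R) / 2, S = (L - R) / 2.
# For dual mono (L == R), the side channel is identically zero and
# therefore carries no information at all.
left = [100, -250, 4000, 0, 7]
right = list(left)                   # dual mono: channels identical

mid = [(l + r) / 2 for l, r in zip(left, right)]
side = [(l - r) / 2 for l, r in zip(left, right)]
print(side)                          # [0.0, 0.0, 0.0, 0.0, 0.0]
print(mid == [float(v) for v in left])   # True: mid is just the mono signal
```

In principle an encoder can signal such a frame extremely cheaply; the bitrate gap measured above indicates LAME's side-channel safety floor still costs something even when S is silent.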